
    Im2Pano3D: Extrapolating 360° Structure and Semantics Beyond the Field of View

    We present Im2Pano3D, a convolutional neural network that generates a dense prediction of 3D structure and a probability distribution of semantic labels for a full 360° panoramic view of an indoor scene when given only a partial observation (<= 50%) in the form of an RGB-D image. To make this possible, Im2Pano3D leverages strong contextual priors learned from large-scale synthetic and real-world indoor scenes. To ease the prediction of 3D structure, we propose to parameterize 3D surfaces with their plane equations and train the model to predict these parameters directly. To provide meaningful training supervision, we use multiple loss functions that consider both pixel-level accuracy and global context consistency. Experiments demonstrate that Im2Pano3D is able to predict the semantics and 3D structure of the unobserved scene with more than 56% pixel accuracy and less than 0.52m average distance error, which is significantly better than alternative approaches.
    Comment: Video summary: https://youtu.be/Au3GmktK-S
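    The abstract's plane-equation parameterization admits a simple geometric reading: if the network predicts, per pixel, a unit plane normal n and an offset q with n·X = q, then depth follows from intersecting the pixel's camera ray with that plane. The sketch below is a minimal illustration of that idea only, not the authors' code; the pinhole intrinsics K and array names are assumptions.

        import numpy as np

        def depth_from_planes(normals, offsets, K, eps=1e-6):
            """Recover a depth map from per-pixel plane parameters (illustrative).

            normals: (H, W, 3) unit plane normals in camera coordinates (assumed).
            offsets: (H, W) plane offsets q, with n . X = q for points X on the plane.
            K:       (3, 3) pinhole camera intrinsics (assumed model).
            """
            H, W = offsets.shape
            # Back-project each pixel (u, v) to a ray direction r = K^-1 [u, v, 1]^T.
            u, v = np.meshgrid(np.arange(W), np.arange(H))
            pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
            rays = pix @ np.linalg.inv(K).T                      # (H, W, 3)
            # Intersect the ray t * r with the plane n . X = q  =>  t = q / (n . r).
            denom = np.sum(normals * rays, axis=-1)
            t = offsets / np.where(np.abs(denom) < eps, eps, denom)
            # Depth is the z-component of the intersection point t * r.
            return t * rays[..., 2]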

    Rearrangement Planning for General Part Assembly

    Most successes in autonomous robotic assembly have been restricted to a single target or category. We propose to investigate general part assembly, the task of creating novel target assemblies with unseen part shapes. As a fundamental step toward a general part assembly system, we tackle the task of determining the precise poses of the parts in the target assembly, which we term "rearrangement planning". We present the General Part Assembly Transformer (GPAT), a transformer-based model architecture that accurately predicts part poses by inferring how each part shape corresponds to the target shape. Our experiments on both 3D CAD models and real-world scans demonstrate GPAT's generalization abilities to novel and diverse target and part shapes.
    Comment: Project website: https://general-part-assembly.github.io
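    The output of rearrangement planning as described here is one pose per part; composing the assembly then amounts to applying each predicted pose to its part point cloud. The sketch below illustrates only that final composition step under an assumed rotation-matrix-plus-translation pose format; it is not GPAT's actual interface.

        import numpy as np

        def compose_assembly(part_points, rotations, translations):
            """Place each part into the target frame using its predicted pose (sketch).

            part_points:  list of (N_i, 3) point clouds, one per part.
            rotations:    list of (3, 3) rotation matrices (assumed pose format).
            translations: list of (3,) translation vectors (assumed pose format).
            Returns a single (sum N_i, 3) point cloud of the assembled target.
            """
            placed = [pts @ R.T + t
                      for pts, R, t in zip(part_points, rotations, translations)]
            return np.concatenate(placed, axis=0)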

    Matterport3D: Learning from RGB-D Data in Indoor Environments

    Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.
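    A common first step when working with globally aligned RGB-D data like this is lifting each depth frame into a shared world frame using the provided intrinsics and camera poses. The sketch below shows that standard back-projection under an assumed pinhole model; function and variable names are illustrative, not part of the Matterport3D toolkit.

        import numpy as np

        def backproject_to_world(depth, K, cam_to_world):
            """Lift one depth frame to a world-frame point cloud (sketch).

            depth:        (H, W) depth map in meters (assumed units).
            K:            (3, 3) camera intrinsics (assumed pinhole model).
            cam_to_world: (4, 4) camera pose from the dataset's global alignment.
            Returns an (N, 3) array of 3D points for pixels with valid depth.
            """
            H, W = depth.shape
            u, v = np.meshgrid(np.arange(W), np.arange(H))
            valid = depth > 0
            z = depth[valid]
            x = (u[valid] - K[0, 2]) * z / K[0, 0]
            y = (v[valid] - K[1, 2]) * z / K[1, 1]
            pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)   # homogeneous
            pts_world = pts_cam @ cam_to_world.T                      # apply pose
            return pts_world[:, :3]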

    Ultra-Diffuse Galaxies as Extreme Star-forming Environments I: Mapping Star Formation in HI-Rich UDGs

    Ultra-Diffuse Galaxies are both extreme products of galaxy evolution and extreme environments in which to test our understanding of star formation. In this work, we contrast the spatially resolved star formation activity of a sample of 22 HI-selected UDGs and 35 low-mass galaxies from the NASA Sloan Atlas (NSA) within 120 Mpc. We employ a new joint SED fitting method to compute star formation rate and stellar mass surface density maps that leverage the high spatial resolution optical imaging data of the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) and the UV coverage of GALEX, along with HI radial profiles estimated from a subset of galaxies that have spatially resolved HI maps. We find that the UDGs have low star formation efficiencies as a function of their atomic gas down to scales of 500 pc. We additionally find that the stellar mass-weighted sizes of our UDG sample are unremarkable when considered as a function of their HI mass -- their stellar sizes are comparable to the NSA dwarfs at fixed HI mass. This is a natural result in the picture where UDGs are forming stars normally, but at low efficiencies. We compare our results to predictions from contemporary models of galaxy formation, and find in particular that our observations are difficult to reproduce in models where UDGs undergo stellar expansion due to vigorous star formation feedback, if bursty star formation is required down to z=0.
    Comment: Accepted to ApJ, 27 pages, 18 figures
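    The quantity being mapped, star formation efficiency with respect to atomic gas, is simply the SFR surface density divided by the HI surface density, with the gas depletion time as its inverse. The sketch below illustrates that definition only; it is not the paper's SED-fitting pipeline, and the array names and units are assumptions.

        import numpy as np

        def star_formation_efficiency(sigma_sfr, sigma_hi):
            """Per-pixel star formation efficiency and depletion time (sketch).

            sigma_sfr: SFR surface density map [Msun / yr / kpc^2] (assumed units).
            sigma_hi:  atomic gas surface density map [Msun / kpc^2] (assumed units).
            Returns (SFE [1/yr], depletion time [yr]); gas-free pixels become NaN.
            """
            sigma_hi = np.where(sigma_hi > 0, sigma_hi, np.nan)
            sfe = sigma_sfr / sigma_hi                  # SFE = Sigma_SFR / Sigma_HI
            t_dep = 1.0 / np.where(sfe > 0, sfe, np.nan)  # depletion time = 1 / SFE
            return sfe, t_dep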